
    New horizons in the study of child language acquisition

    Get PDF
    URL to paper on conference site.
    Naturalistic longitudinal recordings of child development promise to reveal fresh perspectives on fundamental questions of language acquisition. In a pilot effort, we have recorded 230,000 hours of audio-video recordings spanning the first three years of one child's life at home. To study a corpus of this scale and richness, current methods of developmental cognitive science are inadequate. We are developing new methods for data analysis and interpretation that combine pattern recognition algorithms with interactive user interfaces and data visualization. Preliminary speech analysis reveals surprising levels of linguistic fine-tuning by caregivers that may provide crucial support for word learning. Ongoing analyses of the corpus aim to model detailed aspects of the child's language development as a function of learning mechanisms combined with lifetime experience. Plans to collect similar corpora from more children based on a transportable recording system are underway.
    Funders: National Science Foundation (U.S.); MIT Center for Future Banking; Massachusetts Institute of Technology. Media Laboratory; United States. Office of Naval Research; United States. Dept. of Defense

    Fast transcription of unstructured audio recordings

    Get PDF
    URL to conference session list. Title is under heading: Wed-Ses1-P1: Phonetics, Phonology, cross-language comparisons, pathology.
    We introduce a new method for human-machine collaborative speech transcription that is significantly faster than existing transcription methods. In this approach, automatic audio processing algorithms robustly detect speech in audio recordings and split it into short, easy-to-transcribe segments. Sequences of speech segments are loaded into a transcription interface that lets a human transcriber simply listen and type, obviating the need to manually find and segment speech or explicitly control audio playback. As a result, playback stays synchronized to the transcriber's speed of transcription. In evaluations using naturalistic audio recordings made in everyday home situations, the new method is up to 6 times faster than other popular transcription tools while preserving transcription quality.
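
The core of the approach is automatic segmentation: detect speech, then cut it into short pieces a transcriber can type in one pass. The sketch below is a minimal stand-in for the paper's robust detector, using simple frame-energy thresholding; the function name, thresholds, and segment cap are illustrative assumptions, not the published system.

```python
import numpy as np

def detect_speech_segments(samples, rate, frame_ms=30, threshold_db=-35.0,
                           max_segment_s=5.0):
    """Split audio into short speech segments via frame-energy thresholding.

    Frames whose RMS energy exceeds threshold_db are marked as speech, and
    contiguous speech runs are capped at max_segment_s so each piece stays
    easy to transcribe in a single listen-and-type pass.
    """
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames.astype(float) ** 2, axis=1))
    db = 20 * np.log10(np.maximum(rms, 1e-10))
    speech = db > threshold_db

    segments, start = [], None
    max_frames = int(max_segment_s * 1000 / frame_ms)
    # append a trailing non-speech frame so an open segment is always closed
    for i, is_speech in enumerate(np.append(speech, False)):
        if is_speech and start is None:
            start = i
        elif start is not None and (not is_speech or i - start >= max_frames):
            segments.append((start * frame_len / rate, i * frame_len / rate))
            start = i if is_speech else None
    return segments
```

Each returned `(start_s, end_s)` pair can then be queued in the transcription interface in order, which is what keeps playback synchronized with typing speed.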

    A longitudinal study of prosodic exaggeration in child-directed speech

    Get PDF
    We investigate the role of prosody in the child-directed speech of three English-speaking adults using data collected for the Human Speechome Project, an ecologically valid, longitudinal corpus recorded in the home of a family with a young child. We examined differences in prosody between child-directed and adult-directed speech, as well as changes in the prosody of child-directed speech as the child gets older. Results showed significant effects of speech type on vowel duration, mean F0, and F0 range. We also found significant changes in the prosody of child-directed speech as the child gets older.
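
Two of the measures compared, mean F0 and F0 range, reduce to simple per-utterance statistics once a pitch tracker has produced F0 values for the voiced frames. The helper below is an illustrative sketch of that summarization step (the function name and input format are assumptions, not from the paper).

```python
import statistics

def prosody_profile(f0_tracks):
    """Summarize prosody over a set of utterances.

    Each utterance is a list of voiced-frame F0 values in Hz, assumed to be
    extracted already by a pitch tracker. Returns the per-condition mean F0
    and mean F0 range, two of the measures compared in the study.
    """
    means = [statistics.mean(track) for track in f0_tracks]
    ranges = [max(track) - min(track) for track in f0_tracks]
    return {"mean_f0": statistics.mean(means),
            "f0_range": statistics.mean(ranges)}
```

Computing this profile separately for child-directed and adult-directed utterances (and per age bin) gives the quantities whose differences the study tests.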

    A Human-Machine Collaborative System for Identifying Rumors on Twitter

    Get PDF
    The spread of rumors on social media, especially in time-sensitive situations such as real-world emergencies, can have harmful effects on individuals and society. In this work, we developed a human-machine collaborative system on Twitter for fast identification of rumors about real-world events. The system reduces the amount of information that users must sift through to identify rumors about real-world events by several orders of magnitude.

    Exploiting feature dynamics for active object recognition

    Get PDF
    This paper describes a new approach to object recognition for active vision systems that integrates information across multiple observations of an object. The approach exploits the order relationship between successive frames to derive a classifier based on the characteristic motion of local features across visual sweeps. This motion model reveals structural information about the object that can be exploited for recognition. The main contribution of this paper is a recognition system that extends invariant local features (shape contexts) into the time domain by integrating a motion model. Evaluations on one standardized dataset and one collected with the humanoid robot in our laboratory demonstrate that the motion model yields higher-quality hypotheses about object categories more quickly than a baseline system that treats object views as unordered streams of images.
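
The key contrast with treating views as an unordered stream is that evidence is fused across an ordered sequence of observations. The sketch below shows the simplest version of that fusion, accumulating per-frame class likelihoods in log space under a conditional-independence assumption; it illustrates sequential evidence integration in general, not the paper's shape-context motion model.

```python
import math

def accumulate_evidence(frame_likelihoods, classes):
    """Fuse per-frame class likelihoods across an ordered visual sweep.

    frame_likelihoods: list of dicts mapping class -> P(observation | class).
    Assuming conditional independence across frames (a simplification of the
    paper's feature-motion model), the class posterior is proportional to
    the product of per-frame likelihoods, accumulated in log space.
    """
    log_post = {c: 0.0 for c in classes}
    for frame in frame_likelihoods:
        for c in classes:
            log_post[c] += math.log(max(frame.get(c, 1e-12), 1e-12))
    # normalize into a posterior distribution, subtracting the max for stability
    m = max(log_post.values())
    unnorm = {c: math.exp(v - m) for c, v in log_post.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}
```

Because the posterior sharpens with every informative frame, a system built this way can commit to a hypothesis earlier than one that scores each view independently.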

    An automatic child-directed speech detector for the study of child language development

    Get PDF
    http://interspeech2012.org/accepted-abstract.html?id=210
    In this paper, we present an automatic child-directed speech detection system to be used in the study of child language development. Child-directed speech (CDS) is speech that caregivers direct towards infants. It is not uncommon for corpora used in child language development studies to contain a mix of CDS and non-CDS, and as these corpora grow, manual annotation of CDS becomes impractical. Our automatic CDS detector addresses this issue. The focus of this paper is to propose and evaluate different sets of features for the detection of CDS, using several off-the-shelf classifiers. First, we look at the performance of a set of acoustic features. We then combine these acoustic features with several linguistic and, eventually, contextual features. Using the full set of features, our CDS detector correctly identified CDS with an accuracy of .88 and an F1 score of .87 using Naive Bayes.
    Index Terms: motherese, automatic, child-directed speech, infant-directed speech, adult-directed speech, prosody, language development
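
The best-performing classifier reported is Naive Bayes over acoustic (and later linguistic/contextual) features. The self-contained sketch below implements a minimal Gaussian Naive Bayes of the kind available off the shelf; the example feature interpretation (e.g. mean F0, F0 range) is illustrative and not the paper's exact feature set.

```python
import math

class TinyGaussianNB:
    """Minimal Gaussian Naive Bayes classifier.

    Feature vectors here would hold acoustic measures such as mean F0 and
    F0 range per utterance (an illustrative assumption); labels are e.g.
    "cds" vs "non-cds".
    """

    def fit(self, X, y):
        self.stats = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            n = len(rows)
            means = [sum(col) / n for col in zip(*rows)]
            variances = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                         for col, m in zip(zip(*rows), means)]
            self.stats[label] = (n / len(y), means, variances)
        return self

    def predict(self, x):
        best, best_lp = None, -math.inf
        for label, (prior, means, variances) in self.stats.items():
            # log prior plus sum of per-feature Gaussian log-likelihoods
            lp = math.log(prior)
            for v, m, var in zip(x, means, variances):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

In practice a library implementation (e.g. scikit-learn's `GaussianNB`) would be used; the point is that a simple generative model over a few prosodic features already separates CDS from adult-directed speech fairly well.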

    Automatic Estimation of Transcription Accuracy and Difficulty

    Get PDF
    Managing a large-scale speech transcription task with a team of human transcribers requires effective quality control and workload distribution. As it becomes easier and cheaper to collect massive audio corpora, the problem is magnified. Relying on expert review or transcribing all speech multiple times is impractical. Furthermore, speech that is difficult to transcribe may be better handled by a more experienced transcriber or skipped entirely. We present a fully automatic system to address these issues. First, we use the system to estimate transcription accuracy from a single transcript and show that it correlates well with inter-transcriber agreement. Second, we use the system to estimate the transcription “difficulty” of a speech segment and show that it is strongly correlated with transcriber effort. This system can help a transcription manager determine when speech segments may require review, track transcriber performance, and efficiently manage the transcription process.
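
The abstract doesn't spell out the estimator, but a plausible ingredient for automatic accuracy estimation is disagreement between the human transcript and an automatic hypothesis, measured as word error rate. The sketch below implements that standard metric; treating high WER against an ASR hypothesis as a flag for difficult or inaccurate transcripts is an assumption here, not the paper's stated method.

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance divided by reference length.

    High disagreement between a human transcript and an automatic
    hypothesis can flag segments as difficult or worth reviewing.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)
```

A transcription manager could then route segments whose score exceeds a threshold to an experienced transcriber or to review, which is the workflow the abstract describes.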

    Grounding language models in spatiotemporal context

    Get PDF
    Natural language is rich and varied, but also highly structured. The rules of grammar are a primary source of linguistic regularity, but there are many other factors that govern patterns of language use. Language models attempt to capture linguistic regularities, typically by modeling the statistics of word use, thereby folding in some aspects of grammar and style. Spoken language is an important and interesting subset of natural language that is temporally and spatially grounded. While time and space may directly contribute to a speaker’s choice of words, they may also serve as indicators for communicative intent or other contextual and situational factors. To investigate the value of spatial and temporal information, we build a series of language models using a large, naturalistic corpus of spatially and temporally coded speech collected from a home environment. We incorporate this extralinguistic information by building spatiotemporal word classifiers that are mixed with traditional unigram and bigram models. Our evaluation shows that both perplexity and word error rate can be significantly improved by incorporating this information in a simple framework. The underlying principles of this work could be applied in a wide range of scenarios in which temporal or spatial information is available.
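
Mixing a spatiotemporal word model with a traditional n-gram model can be done by simple linear interpolation. The sketch below shows that mixing step; the `(zone, hour)` context key and the dictionary structures are illustrative assumptions about how location and time might be coded, not the paper's exact representation.

```python
def mixed_probability(word, prev_word, bigram, spatiotemporal, context,
                      lam=0.7):
    """Linearly interpolate a bigram model with a spatiotemporal word model.

    bigram: maps (prev, word) -> P(word | prev).
    spatiotemporal: maps a context key, e.g. (zone, hour), to a word
    distribution learned from spatially and temporally coded speech.
    lam controls how much weight the traditional n-gram model receives;
    small floors stand in for proper smoothing.
    """
    p_bigram = bigram.get((prev_word, word), 1e-6)
    p_st = spatiotemporal.get(context, {}).get(word, 1e-6)
    return lam * p_bigram + (1 - lam) * p_st
```

In a speech recognizer, raising the probability of words typical for the current room and time of day is what drives the reported perplexity and word-error-rate gains.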

    Anisotropic strange stars in Tolman-Kuchowicz spacetime

    Full text link
    We attempt to study a singularity-free model for spherically symmetric anisotropic strange stars under Einstein's general theory of relativity by exploiting the Tolman-Kuchowicz metric. Further, we have assumed that the cosmological constant Λ is a scalar variable dependent on the spatial coordinate r. To describe the strange star candidates, we consider that they are made of strange quark matter (SQM), whose distribution is assumed to be governed by the MIT bag equation of state. To obtain the unknown constants of the stellar system, we match the interior Tolman-Kuchowicz metric to the exterior modified Schwarzschild metric with the cosmological constant at the surface of the system. Following Deb et al., we predict the exact values of the radii for different strange star candidates based on the observed masses of the stellar objects and the chosen parametric values of Λ as well as the bag constant B. The set of solutions satisfies all the physical requirements to represent strange stars. Interestingly, our study reveals that as the values of Λ and B increase, the anisotropic system becomes gradually smaller in size, turning the whole system into a more compact ultra-dense stellar object.
    Comment: 18 pages, 10 figures
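
For reference, the Tolman-Kuchowicz ansatz mentioned above takes, in its standard form in the literature, the static spherically symmetric line element with metric potentials (constants a, b, B, C are fixed by the boundary matching the abstract describes; this is the textbook form, not reproduced from the paper):

```latex
ds^2 = e^{\nu(r)}\,dt^2 - e^{\lambda(r)}\,dr^2
       - r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right),
\qquad
\nu(r) = Br^2 + 2\ln C, \qquad
e^{\lambda(r)} = 1 + ar^2 + br^4 .
```

The MIT bag equation of state relating the radial pressure to the energy density, with bag constant $\mathcal{B}$, is the usual

```latex
p = \frac{1}{3}\left(\rho - 4\mathcal{B}\right).
```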

    Grounding spatial prepositions for video search

    Get PDF
    Spatial language video retrieval is an important real-world problem that forms a test bed for evaluating semantic structures for natural language descriptions of motion on naturalistic data. Video search by natural language query requires that linguistic input be converted into structures that operate on video in order to find clips that match a query. This paper describes a framework for grounding the meaning of spatial prepositions in video. We present a library of features that can be used to automatically classify a video clip based on whether it matches a natural language query. To evaluate these features, we collected a corpus of natural language descriptions of the motion of people in video clips. We characterize the language used in the corpus, and use it to train and test models for the meanings of the spatial prepositions "to," "across," "through," "out," "along," "towards," and "around." The classifiers can be used to build a spatial language video retrieval system that finds clips matching queries such as "across the kitchen."
    Funder: United States. Office of Naval Research (MURI N00014-07-1-0749)
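
Features in such a library compute geometric relations between a person's trajectory and a landmark region. The toy feature below sketches one cue relevant to "across": the track enters the landmark region on one side and leaves on the other. The paper's feature library is richer; the function, its axis-aligned landmark, and the entry/exit test are illustrative simplifications.

```python
def crosses_region(track, x_min, x_max):
    """Toy "across" cue: does a 2-D trajectory traverse a landmark region?

    track: list of (x, y) points in order; the landmark is modeled as the
    x-interval [x_min, x_max] (an illustrative simplification). Returns
    True when the track passes through the region from one side to the
    other, in either direction.
    """
    xs = [x for x, _ in track]
    inside = any(x_min <= x <= x_max for x in xs)
    left_to_right = xs[0] < x_min and xs[-1] > x_max
    right_to_left = xs[0] > x_max and xs[-1] < x_min
    return inside and (left_to_right or right_to_left)
```

A trained classifier for "across the kitchen" would combine several such cues (distance to landmark, fraction of the track inside, heading relative to the landmark axis) rather than rely on one hard rule.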